Motion-sensitive cortex and motion semantics in American Sign Language

Authors

  • Stephen McCullough
  • Ayse Pinar Saygin
  • Franco Korpics
  • Karen Emmorey

Abstract

Previous research indicates that motion-sensitive brain regions are engaged when comprehending motion semantics expressed by words or sentences. Using fMRI, we investigated whether such neural modulation can occur when the linguistic signal itself is visually dynamic and motion semantics is expressed by movements of the hands. Deaf and hearing users of American Sign Language (ASL) were presented with signed sentences that conveyed motion semantics ("The deer walked along the hillside.") or were static, conveying little or no motion ("The deer slept along the hillside."); sentences were matched for the amount of visual motion. Motion-sensitive visual areas (MT+) were localized individually in each participant. As a control, the Fusiform Face Area (FFA) was also localized for the deaf participants. The whole-brain analysis revealed that static (locative) sentences engaged regions in left parietal cortex more than motion sentences did, replicating previous results implicating these regions in comprehending spatial language in sign languages. Greater activation was observed in the functionally defined MT+ ROI for motion than for static sentences in both deaf and hearing signers. No modulation of neural activity by sentence type was observed in the FFA. Deafness did not affect the modulation of MT+ by motion semantics, but hearing signers exhibited stronger neural activity in MT+ for both sentence types, perhaps due to differences in exposure to and/or use of ASL. We conclude that top-down modulation of motion-sensitive cortex by linguistic semantics is not disrupted by the visual motion present in sign language sentences.
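
The abstract does not include the analysis pipeline; the sketch below is only a rough illustration of the kind of ROI comparison it describes (mean activation inside each participant's functionally localized MT+ mask, contrasted between motion and static sentence conditions). The file names, subject count, and the paired t-test are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch of an MT+ ROI analysis like the one described
# in the abstract: average beta estimates within each participant's
# individually localized MT+ mask, then compare motion vs. static
# sentence conditions across participants. All paths are placeholders.
import numpy as np
from scipy import stats
from nilearn.maskers import NiftiMasker

def roi_mean_beta(beta_img, roi_mask_img):
    """Mean beta estimate across voxels of a subject-specific ROI."""
    masker = NiftiMasker(mask_img=roi_mask_img)
    voxel_betas = masker.fit_transform(beta_img)  # shape: (1, n_voxels)
    return voxel_betas.mean()

subjects = [f"sub-{i:02d}" for i in range(1, 17)]  # assumed subject IDs
motion, static = [], []
for sub in subjects:
    mask = f"{sub}/mt_plus_localizer_mask.nii.gz"  # individually localized MT+
    motion.append(roi_mean_beta(f"{sub}/beta_motion_sentences.nii.gz", mask))
    static.append(roi_mean_beta(f"{sub}/beta_static_sentences.nii.gz", mask))

# Paired comparison: is MT+ activation higher for motion sentences?
t, p = stats.ttest_rel(motion, static)
print(f"MT+ motion vs. static: t = {t:.2f}, p = {p:.4f}")
```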

Related articles

Applying mean shift and motion detection approaches to hand tracking in sign language

Hand gesture recognition is very important for communicating in sign language. In this paper, an effective object tracking and hand gesture recognition method is proposed. This method is a combination of two well-known approaches: the mean shift and the motion detection algorithm. The mean shift algorithm can track objects based on color, but when the hand passes in front of the face, occlusion occurs. Several...
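
As a rough illustration (not this paper's implementation) of combining mean shift color tracking with frame-difference motion detection, the OpenCV sketch below masks the color back-projection with a motion mask so that a moving hand is not lost to the similarly colored but mostly static face. The video path, initial hand window, and threshold values are placeholder assumptions.

```python
# Illustrative combination of mean shift tracking and motion detection.
import cv2

cap = cv2.VideoCapture("signing.mp4")     # placeholder video path
ok, frame = cap.read()
track_window = (200, 150, 60, 60)         # assumed initial hand box (x, y, w, h)

# Hue histogram of the initial hand region drives the back-projection.
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [32], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)

    # Frame-difference motion mask suppresses static skin regions
    # (e.g. the face), keeping mean shift locked on the moving hand.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion_mask = cv2.threshold(cv2.absdiff(gray, prev_gray), 25, 255,
                                cv2.THRESH_BINARY)[1]
    prev_gray = gray
    back_proj = cv2.bitwise_and(back_proj, back_proj, mask=motion_mask)

    ok, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```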

The Biology of Linguistic Expression Impacts Neural Correlates for Spatial Language

Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/mot...

Comparing the Effects of Auditory Deprivation and Sign Language within the Auditory and Visual Cortex

To investigate neural plasticity resulting from early auditory deprivation and use of American Sign Language, we measured responses to visual stimuli in deaf signers, hearing signers, and hearing nonsigners using functional magnetic resonance imaging. We examined "compensatory hypertrophy" (changes in the responsivity/size of visual cortical areas) and "cross-modal plasticity" (changes in audit...

Modulation of BOLD Response in Motion-sensitive Lateral Temporal Cortex by Real and Fictive Motion Sentences

Can linguistic semantics affect neural processing in feature-specific visual regions? Specifically, when we hear a sentence describing a situation that includes motion, do we engage neural processes that are part of the visual perception of motion? What if a motion verb is used figuratively rather than literally? We used fMRI to investigate whether semantic content can "penetrate" and modulate n...

Interpreting American Sign Language with Kinect

Accurate, real-time machine translation of sign language has the potential to significantly improve communication between hearing-impaired people and those who do not understand sign language. Previous research on computer recognition of sign language has taken input from technologies including motion-capture gloves and computer vision combined with colored gloves. These projects co...

Journal:
  • NeuroImage

Volume 63, Issue 1

Pages: -

Publication date: 2012